Geometrical Initialization, Parametrization and Control of Multilayer Perceptrons: Application to Function Approximation
Authors
Abstract
This paper proposes a new method to reduce training time for neural nets used as function approximators. The method relies on a geometrical control of Multilayer Perceptrons (MLP). A geometrical initialization first gives better starting points for the learning process. A geometrical parametrization then achieves more stable convergence. During the learning process, a dynamic geometrical control helps to avoid local minima. Finally, simulation results are presented, showing a drastic reduction in training time and an increase in convergence rate.
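The abstract does not spell out the initialization scheme, but the geometric idea can be illustrated as follows: instead of drawing weights and biases blindly at random, place each hidden unit's sigmoid transition (the hyperplane w·x + b = 0) through a point of the input domain, so every unit is active somewhere over the region to be approximated. This is a hypothetical sketch, not the paper's exact procedure; the function name, the `gain` parameter, and the anchor-point strategy are illustrative assumptions.

```python
import numpy as np

def geometric_init(n_hidden, lo, hi, gain=4.0, rng=None):
    """Hypothetical sketch of a geometric MLP initialization:
    each hidden unit's hyperplane w.x + b = 0 is made to pass
    through a random anchor point inside the input box [lo, hi]^d."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    d = lo.size
    # random unit normals give each hyperplane an orientation
    W = rng.standard_normal((n_hidden, d))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    W *= gain                        # gain controls sigmoid steepness
    # anchor points spread over the input domain
    P = lo + rng.random((n_hidden, d)) * (hi - lo)
    b = -(W * P).sum(axis=1)         # ensures w.p + b = 0 at each anchor
    return W, b

W, b = geometric_init(8, lo=[-1, -1], hi=[1, 1], rng=0)
```

By construction, every unit's pre-activation crosses zero inside the input box, so no sigmoid starts out saturated over the whole domain, which is the kind of "better starting point" the abstract alludes to.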
Similar References
IDIAP Technical report
Proper initialization is one of the most important prerequisites for fast convergence of feed-forward neural networks like high-order and multilayer perceptrons. This publication aims at determining the optimal value of the initial weight variance (or range), which is the principal parameter of random weight initialization methods for both types of neural networks. An overview of random weight...
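As a minimal sketch of the kind of random initialization this report studies, the code below draws weights from a zero-mean Gaussian whose standard deviation is the tunable parameter. The 1/√fan_in default shown here is a common heuristic, not necessarily the report's optimal value; the function name and signature are assumptions for illustration.

```python
import numpy as np

def random_init(fan_in, fan_out, sigma=None, rng=None):
    """Random weight initialization: `sigma` is the initial weight
    standard deviation whose optimal value the report investigates.
    Default 1/sqrt(fan_in) keeps pre-activation variance on the
    order of the input variance (a standard heuristic)."""
    rng = np.random.default_rng(rng)
    if sigma is None:
        sigma = 1.0 / np.sqrt(fan_in)
    return rng.normal(0.0, sigma, size=(fan_out, fan_in))

W = random_init(fan_in=100, fan_out=50, rng=0)
```

With `fan_in=100` the default standard deviation is 0.1; too large a value saturates sigmoids at the start, too small a value makes all units nearly linear and identical, which is why the variance is the principal tuning knob.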
Classification, Association and Pattern Completion using Neural Similarity Based Methods
A framework for Similarity-Based Methods (SBMs) includes many classification models as special cases: neural networks of the Radial Basis Function Network type, Feature Space Mapping neurofuzzy networks based on separable transfer functions, Learning Vector Quantization, variants of the k-nearest-neighbor methods, and several new models that may be presented in network form. Multilayer Percept...
Orthogonal least square algorithm applied to the initialization of multi-layer perceptrons
An efficient procedure is proposed for initializing two-layer perceptrons and for determining the optimal number of hidden neurons. It is based on the Orthogonal Least Squares method, which is typical of RBF as well as wavelet networks. Some experiments are discussed, in which the proposed method is coupled with standard backpropagation training and compared with random initialization.
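The core least-squares step behind such an initialization can be sketched as follows: with the hidden-layer activations held fixed, the output weights minimizing the squared error have a closed-form ordinary least-squares solution. This sketch omits the orthogonal *selection* of hidden neurons that the abstract mentions; the helper name and bias handling are illustrative assumptions.

```python
import numpy as np

def ols_output_weights(H, y):
    """Fit output-layer weights by ordinary least squares, given
    fixed hidden activations H (n_samples x n_hidden) and targets y.
    The orthogonal neuron-selection step of the full OLS procedure
    is not reproduced here."""
    H1 = np.hstack([H, np.ones((H.shape[0], 1))])  # append bias column
    w, *_ = np.linalg.lstsq(H1, y, rcond=None)
    return w[:-1], w[-1]                           # weights, bias

# toy usage: recover a known linear readout of two hidden activations
rng = np.random.default_rng(0)
H = rng.random((200, 2))
y = H @ np.array([2.0, -1.0]) + 0.5
w, b = ols_output_weights(H, y)
```

Because the output layer of a two-layer perceptron is linear in its weights, this step is exact; the hard part, which the cited paper addresses, is choosing which hidden neurons to keep.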
Multilayer Perceptrons with Radial Basis Functions as Value Functions in Reinforcement Learning
Using multilayer perceptrons (MLPs) to approximate the state-action value function in reinforcement learning (RL) algorithms could become a nightmare due to the constant possibility of unlearning past experiences. Moreover, since the target values in the training examples are bootstrap values, that is, estimates of other estimates, the chances of getting stuck in a local minimum are increased. The...